Phelps County
Resonac creates 27-member consortium to pursue advanced chip developments
Resonac, a Japanese chip-materials maker, has announced the creation of Joint 3, which it describes as a consortium of 27 companies working together on semiconductor-related developments. "With next-generation technologies like generative AI and self-driving cars rapidly spreading, the technology required for semiconductors is getting more advanced and complex," Resonac CEO Hidehito Takahashi said Wednesday. Joint 3, led by Resonac, draws members from several countries, including 3M of St. Paul, Minnesota; Brewer Science of Rolla, Missouri; Synopsys of Sunnyvale, California; and the Singapore-headquartered, Hong Kong-listed ASMPT.
- North America > United States > Missouri > Phelps County > Rolla (0.34)
- North America > United States > Minnesota (0.34)
- North America > United States > California > Santa Clara County > Sunnyvale (0.34)
- (2 more...)
A Comprehensive Dataset for Underground Miner Detection in Diverse Scenarios
Addy, Cyrus, Gurumadaiah, Ajay Kumar, Gao, Yixiang, Awuah-Offei, Kwame
Underground mining operations face significant safety challenges that make emergency response capabilities crucial. While robots have shown promise in assisting with search and rescue operations, their effectiveness depends on reliable miner detection capabilities. Deep learning algorithms offer potential solutions for automated miner detection, but require comprehensive training datasets, which are currently lacking for underground mining environments. This paper presents a novel thermal imaging dataset specifically designed to enable the development and validation of miner detection systems for potential emergency applications. We systematically captured thermal imagery of various mining activities and scenarios to create a robust foundation for detection algorithms. To establish baseline performance metrics, we evaluated several state-of-the-art object detection algorithms including YOLOv8, YOLOv10, YOLO11, and RT-DETR on our dataset. While not exhaustive of all possible emergency situations, this dataset serves as a crucial first step toward developing reliable thermal-based miner detection systems that could eventually be deployed in real emergency scenarios. This work demonstrates the feasibility of using thermal imaging for miner detection and establishes a foundation for future research in this critical safety application.
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- North America > United States > Missouri > Phelps County > Rolla (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- (2 more...)
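The baseline evaluations above (YOLOv8, YOLOv10, YOLO11, RT-DETR) all rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal plain-Python sketch of that matching step, with hypothetical box coordinates rather than anything from the dataset itself:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(preds, gts, thresh=0.5):
    """Greedy matching: count true positives at a given IoU threshold.
    Each ground-truth box may be matched at most once."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thresh
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    return tp

# toy example: one prediction overlaps the ground-truth miner box, one does not
gts = [(10, 10, 50, 50)]
preds = [(12, 12, 52, 52), (100, 100, 120, 120)]
print(match_detections(preds, gts))  # 1 true positive
```

Real evaluation pipelines sweep the confidence and IoU thresholds to produce precision-recall curves and mAP, but the matching core is the same.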
Enabling Heterogeneous Adversarial Transferability via Feature Permutation Attacks
Adversarial attacks in black-box settings are highly practical, with transfer-based attacks being the most effective at generating adversarial examples (AEs) that transfer from surrogate models to unseen target models. However, their performance significantly degrades when transferring across heterogeneous architectures -- such as CNNs, MLPs, and Vision Transformers (ViTs) -- due to fundamental architectural differences. To address this, we propose Feature Permutation Attack (FPA), a zero-FLOP, parameter-free method that enhances adversarial transferability across diverse architectures. FPA introduces a novel feature permutation (FP) operation, which rearranges pixel values in selected feature maps to simulate long-range dependencies, effectively making CNNs behave more like ViTs and MLPs. This enhances feature diversity and improves transferability both across heterogeneous architectures and within homogeneous CNNs. Extensive evaluations on 14 state-of-the-art architectures show that FPA achieves maximum absolute gains in attack success rates of 7.68% on CNNs, 14.57% on ViTs, and 14.48% on MLPs, outperforming existing black-box attacks. Additionally, FPA is highly generalizable and can seamlessly integrate with other transfer-based attacks to further boost their performance. Our findings establish FPA as a robust, efficient, and computationally lightweight strategy for enhancing adversarial transferability across heterogeneous architectures.
- North America > United States > Kentucky > Fayette County > Lexington (0.14)
- North America > United States > Missouri > Phelps County > Rolla (0.04)
- Asia > Middle East > Israel > Tel Aviv District > Tel Aviv (0.04)
- Information Technology > Security & Privacy (0.37)
- Government > Military (0.37)
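The core FP operation — rearranging pixel values within selected feature maps so that distant locations interact — can be sketched in plain Python. This is an illustrative spatial shuffle under assumed parameters (`frac`, a fixed seed), not the paper's exact permutation scheme:

```python
import random

def feature_permutation(fmap, frac=0.5, seed=0):
    """Permute spatial positions of a 2-D feature map (list of rows).

    Flattens the map, shuffles the values at a random subset of
    positions, and restores the original shape -- mixing distant
    locations to mimic the long-range dependencies of attention-based
    models. Values are only rearranged, never altered, so the map's
    value distribution is preserved (a zero-FLOP, parameter-free op).
    """
    rng = random.Random(seed)
    h, w = len(fmap), len(fmap[0])
    flat = [v for row in fmap for v in row]
    idx = rng.sample(range(h * w), int(frac * h * w))
    vals = [flat[i] for i in idx]
    rng.shuffle(vals)
    for i, v in zip(idx, vals):
        flat[i] = v
    return [flat[r * w:(r + 1) * w] for r in range(h)]

fmap = [[1, 2], [3, 4]]
permuted = feature_permutation(fmap, frac=1.0)
# same multiset of values, same shape, possibly different arrangement
```

In an attack pipeline the permuted maps would replace the originals inside the surrogate network's forward pass while crafting the adversarial example.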
Deep ARTMAP: Generalized Hierarchical Learning with Adaptive Resonance Theory
Melton, Niklas M., da Silva, Leonardo Enzo Brito, Petrenko, Sasha, Wunsch, Donald C. II
This paper presents Deep ARTMAP, a novel extension of the ARTMAP architecture that generalizes the self-consistent modular ART (SMART) architecture to enable hierarchical learning (supervised and unsupervised) across arbitrary transformations of data. The Deep ARTMAP framework operates as a divisive clustering mechanism, supporting an arbitrary number of modules with customizable granularity within each module. Inter-ART modules regulate the clustering at each layer, permitting unsupervised learning while enforcing a one-to-many mapping from clusters in one layer to the next. While Deep ARTMAP reduces to both ARTMAP and SMART in particular configurations, it offers significantly enhanced flexibility, accommodating a broader range of data transformations and learning modalities.
- North America > United States > Missouri > Phelps County > Rolla (0.05)
- Asia > Singapore (0.04)
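The inter-ART constraint described above — a one-to-many mapping from clusters in one layer to the next — can be sketched as a map-field consistency check. This is a deliberately stripped-down illustration (no resonance or vigilance dynamics, hypothetical cluster labels), not the full architecture:

```python
class InterARTMap:
    """Enforce a one-to-many mapping between adjacent layers: a coarse
    cluster may own many fine clusters, but each fine cluster belongs
    to at most one coarse cluster."""

    def __init__(self):
        self.fine_to_coarse = {}

    def link(self, fine, coarse):
        """Return True if linking `fine` to `coarse` is consistent;
        return False (a map-field mismatch) if `fine` already belongs
        to a different coarse cluster."""
        owner = self.fine_to_coarse.setdefault(fine, coarse)
        return owner == coarse

m = InterARTMap()
m.link("fine_a", "coarse_1")   # True: new link
m.link("fine_b", "coarse_1")   # True: coarse_1 may own many fine clusters
m.link("fine_a", "coarse_2")   # False: fine_a is already mapped elsewhere
```

In the real architecture a mismatch would trigger match tracking (raising vigilance and re-searching) rather than simply returning False.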
Physics-Constrained Generative Artificial Intelligence for Rapid Takeoff Trajectory Design
Electric vertical takeoff and landing (eVTOL) aircraft are a key enabler of urban air mobility (UAM), but their design is demanding: conventional multidisciplinary analysis and optimization (MDAO) can be expensive, while surrogate-based optimization can struggle with challenging physical constraints. This work proposes physics-constrained generative adversarial networks (physicsGAN) to intelligently parameterize the takeoff control profiles of an eVTOL aircraft and to transform the original design space into a feasible space, i.e., a space in which every design directly satisfies all design constraints. The physicsGAN-enabled surrogate-based takeoff trajectory design framework was demonstrated on the Airbus A3 Vahana. The physicsGAN generated only feasible control profiles of power and wing angle, with around 98.9% of designs satisfying all constraints. The proposed framework reached 99.6% accuracy relative to the simulation-based optimal design in only 2.2 seconds, reducing computational time by around 200 times. By comparison, data-driven GAN-enabled surrogate-based optimization took 21.9 seconds using a derivative-free optimizer, roughly an order of magnitude slower, while data-driven GAN-based optimization using gradient-based optimizers could not consistently find the optimal design across random trials and became stuck in infeasible regions, which is problematic in practice. The proposed physicsGAN-based framework therefore outperformed the data-driven GAN-based design in efficiency (2.2 seconds), optimality (99.6% accurate), and feasibility (100% feasible). According to the literature review, this is the first physics-constrained generative artificial intelligence approach enabled by surrogate models.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > Missouri > Phelps County > Rolla (0.04)
- North America > United States > Massachusetts (0.04)
- Asia > Japan > Honshū > Kansai > Wakayama Prefecture > Wakayama (0.04)
- Transportation > Air (1.00)
- Aerospace & Defense > Aircraft (1.00)
Extreme AutoML: Analysis of Classification, Regression, and NLP Performance
Ratner, Edward, Farmer, Elliot, Warner, Brandon, Douglas, Christopher, Lendasse, Amaury
Utilizing machine learning techniques has always required choosing hyperparameters. This is true whether one uses a classical technique such as a KNN or very modern neural networks such as Deep Learning. Though in many applications, hyperparameters are chosen by hand, automated methods have become increasingly more common. These automated methods have become collectively known as automated machine learning, or AutoML. Several automated selection algorithms have shown similar or improved performance over state-of-the-art methods. This breakthrough has led to the development of cloud-based services like Google AutoML, which is based on Deep Learning and is widely considered to be the industry leader in AutoML services. Extreme Learning Machines (ELMs) use a fundamentally different type of neural architecture, producing better results at a significantly discounted computational cost. We benchmark the Extreme AutoML technology against Google's AutoML using several popular classification data sets from the University of California at Irvine's (UCI) repository, and several other data sets, observing significant advantages for Extreme AutoML in accuracy, Jaccard Indices, the variance of Jaccard Indices across classes (i.e. class variance) and training times.
- North America > United States > California > Alameda County > Fremont (0.05)
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > Missouri > Phelps County > Rolla (0.04)
- (6 more...)
- Research Report > Promising Solution (0.67)
- Research Report > New Finding (0.46)
- Health & Medicine (1.00)
- Media > Film (0.94)
- Government > Regional Government > North America Government > United States Government (0.93)
- Leisure & Entertainment (0.69)
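The architectural difference the abstract points to is easy to show concretely: an ELM fixes a random hidden layer and learns only the output weights, in closed form, with no gradient descent. A minimal NumPy sketch on toy data (hyperparameters and the regression task are illustrative, not the benchmarked system):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=50):
    """Fit an ELM regressor: input weights W and biases b are random and
    never trained; only the output weights beta are learned, via a
    single least-squares solve on the hidden-layer features."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression: learn y = x^2 on [-1, 1] without any gradient descent
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = X[:, 0] ** 2
W, b, beta = elm_fit(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Because training reduces to one linear solve, fitting is orders of magnitude cheaper than backpropagation, which is the computational advantage the benchmark exploits.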
LPLgrad: Optimizing Active Learning Through Gradient Norm Sample Selection and Auxiliary Model Training
Gul, Shreen, Elmahallawy, Mohamed, Madria, Sanjay, Tripathy, Ardhendu
Machine learning models are increasingly being utilized across various fields and tasks due to their outstanding performance and strong generalization capabilities. Nonetheless, their success hinges on the availability of large volumes of annotated data, the creation of which is often labor-intensive, time-consuming, and expensive. Many active learning (AL) approaches have been proposed to address these challenges, but they often fail to fully leverage the information from the core phases of AL, such as training on the labeled set and querying new unlabeled samples. To bridge this gap, we propose a novel AL approach, Loss Prediction Loss with Gradient Norm (LPLgrad), designed to quantify model uncertainty effectively and improve the accuracy of image classification tasks. LPLgrad operates in two distinct phases: (i) the Training Phase predicts the loss for input features by jointly training a main model and an auxiliary model. Both models are trained on the labeled data to maximize the efficiency of the learning process, an aspect often overlooked in previous AL methods. This dual-model approach enhances the ability to extract complex input features and learn intrinsic patterns from the data; (ii) the Querying Phase quantifies the uncertainty of the main model to guide sample selection. This is achieved by calculating the gradient norm of the entropy values for samples in the unlabeled dataset. Samples with the highest gradient norms are prioritized for labeling and subsequently added to the labeled set, improving the model's performance with minimal labeling effort. Extensive evaluations on real-world datasets demonstrate that LPLgrad outperforms state-of-the-art methods by an order of magnitude in accuracy on a small number of labeled images, while achieving comparable training and querying times in multiple image classification tasks.
- North America > United States > Washington > Benton County > Richland (0.04)
- North America > United States > Missouri > Phelps County > Rolla (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > Spain > Andalusia > Granada Province > Granada (0.04)
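The querying criterion — the gradient norm of predictive entropy — has a closed form for a simple model, which makes the idea concrete. For a logistic model p = sigmoid(w·x), the entropy gradient is log((1-p)/p) · p(1-p) · x. A plain-Python sketch with that analytic gradient (a toy stand-in for the paper's networks, where the gradient comes from backpropagation):

```python
import math

def entropy_grad_norm(w, x):
    """L2 norm of the gradient (w.r.t. w) of the prediction entropy
    H(p) = -p log p - (1-p) log(1-p) for a logistic model
    p = sigmoid(w . x). Analytically, grad = log((1-p)/p) * p(1-p) * x."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    p = 1.0 / (1.0 + math.exp(-z))
    scale = abs(math.log((1.0 - p) / p)) * p * (1.0 - p)
    return scale * math.sqrt(sum(xi * xi for xi in x))

def query(w, unlabeled, k=2):
    """Return the k unlabeled samples with the largest entropy-gradient
    norm -- the ones an LPLgrad-style selector would label next."""
    ranked = sorted(unlabeled, key=lambda x: entropy_grad_norm(w, x),
                    reverse=True)
    return ranked[:k]

# toy pool of 1-D samples under a fixed model w = [1.0]
picked = query([1.0], [[0.0], [1.0], [2.0], [5.0]], k=1)
```

Note a subtlety the closed form exposes: at maximal entropy (p = 0.5) the gradient is exactly zero, so this criterion favors samples where the entropy is still changing quickly rather than those already at peak uncertainty.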
FisherMask: Enhancing Neural Network Labeling Efficiency in Image Classification Using Fisher Information
Gul, Shreen, Elmahallawy, Mohamed, Madria, Sanjay, Tripathy, Ardhendu
Deep learning (DL) models are popular across various domains due to their remarkable performance and efficiency. However, their effectiveness relies heavily on large amounts of labeled data, which are often time-consuming and labor-intensive to generate manually. To overcome this challenge, it is essential to develop strategies that reduce reliance on extensive labeled data while preserving model performance. In this paper, we propose FisherMask, a Fisher information-based active learning (AL) approach that identifies key network parameters by masking them based on their Fisher information values. FisherMask enhances batch AL by using Fisher information to select the most critical parameters, allowing the identification of the most impactful samples during AL training. Moreover, Fisher information possesses favorable statistical properties, offering valuable insights into model behavior and a better understanding of performance characteristics within the AL pipeline. Our extensive experiments demonstrate that FisherMask significantly outperforms state-of-the-art methods on diverse datasets, including CIFAR-10 and FashionMNIST, especially under imbalanced settings. These improvements lead to substantial gains in labeling efficiency, making FisherMask an effective tool for measuring the sensitivity of model parameters to data samples. Our code is available at https://github.com/sgchr273/FisherMask.
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > Washington > Benton County > Richland (0.04)
- North America > United States > Missouri > Phelps County > Rolla (0.04)
- North America > Canada > Ontario > Toronto (0.04)
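The masking step has a simple closed form for a logistic model, where the Fisher information of the weights is p(1-p)·xxᵀ: rank parameters by their diagonal Fisher values and keep the largest. A plain-Python sketch of that ranking (a toy analogue of the paper's network-scale masking, with illustrative data):

```python
import math

def fisher_diagonal(w, X):
    """Diagonal of the empirical Fisher information for a logistic model
    p = sigmoid(w . x):  F_jj = (1/n) * sum_i p_i (1 - p_i) x_ij^2."""
    n, d = len(X), len(X[0])
    diag = [0.0] * d
    for x in X:
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))
        for j, xj in enumerate(x):
            diag[j] += p * (1.0 - p) * xj * xj
    return [v / n for v in diag]

def fisher_mask(w, X, keep=2):
    """Indices of the `keep` parameters with the largest diagonal Fisher
    values -- the ones a FisherMask-style method treats as most critical."""
    diag = fisher_diagonal(w, X)
    return sorted(range(len(diag)), key=lambda j: diag[j], reverse=True)[:keep]

# with w = 0, p = 0.5 everywhere, so Fisher reduces to 0.25 * mean(x_j^2):
# the third feature has the largest magnitude and ranks first
mask = fisher_mask([0.0, 0.0, 0.0], [[1.0, 2.0, 3.0]], keep=2)
```

In the full method this mask restricts gradient embeddings to the selected parameters when scoring candidate batches, rather than pruning the model itself.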
When Less Is Not More: Large Language Models Normalize Less-Frequent Terms with Lower Accuracy
Hier, Daniel B., Do, Thanh Son, Obafemi-Ajayi, Tayo
Term normalization is the process of mapping a term from free text to a standardized concept and its machine-readable code in an ontology. Accurate normalization of terms that capture phenotypic differences between patients and diseases is critical to the success of precision medicine initiatives. A large language model (LLM), such as GPT-4o, can normalize terms to the Human Phenotype Ontology (HPO), but it may retrieve incorrect HPO IDs. Reported accuracy rates for LLMs on these tasks may be inflated due to imbalanced test datasets skewed towards high-frequency terms. In our study, using a comprehensive dataset of 268,776 phenotype annotations for 12,655 diseases from the HPO, GPT-4o achieved an accuracy of 13.1% in normalizing 11,225 unique terms. However, the accuracy was unevenly distributed, with higher-frequency and shorter terms normalized more accurately than lower-frequency and longer terms. Feature importance analysis, using SHAP and permutation methods, identified low term frequency as the most significant predictor of normalization errors. These findings suggest that training and evaluation datasets for LLM-based term normalization should balance low- and high-frequency terms to improve model performance, particularly for infrequent terms critical to precision medicine.
- North America > United States > Missouri > Greene County > Springfield (0.05)
- North America > United States > Missouri > Phelps County > Rolla (0.04)
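The imbalance effect the study measures can be checked by stratifying accuracy over term-frequency buckets instead of reporting one pooled number. A small sketch with hypothetical records (the bucket edges and data are illustrative, not the study's):

```python
from collections import defaultdict

def accuracy_by_frequency(records, edges=(1, 10, 100)):
    """Stratify normalization accuracy by annotation frequency.

    `records` is a list of (term_frequency, correct) pairs; `edges` are
    the left edges of the frequency buckets. Pooled accuracy can hide
    poor performance on rare terms, so report each bucket separately.
    """
    buckets = defaultdict(lambda: [0, 0])   # edge -> [hits, total]
    for freq, correct in records:
        edge = max(e for e in edges if e <= freq)
        buckets[edge][0] += int(correct)
        buckets[edge][1] += 1
    return {e: hits / total for e, (hits, total) in sorted(buckets.items())}

# hypothetical: rare terms (freq 1-9) mostly wrong, common terms right
records = [(1, False), (2, False), (3, True), (50, True), (500, True)]
result = accuracy_by_frequency(records)
# rare-term bucket accuracy is about 0.33; the others are 1.0
```

A balanced evaluation set, as the abstract recommends, would weight these buckets comparably rather than letting high-frequency terms dominate the headline score.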
CAV-AD: A Robust Framework for Detection of Anomalous Data and Malicious Sensors in CAV Networks
Rahman, Md Sazedur, Elmahallawy, Mohamed, Madria, Sanjay, Frimpong, Samuel
The adoption of connected and automated vehicles (CAVs) has sparked considerable interest across diverse industries, including the public transportation, underground mining, and agriculture sectors. However, CAVs' reliance on sensor readings makes them vulnerable to significant threats. Manipulating these readings can compromise CAV network security, posing serious risks for malicious activities. Although several anomaly detection (AD) approaches for CAV networks have been proposed, they often fail to: i) detect multiple anomalies in specific sensor(s) with high accuracy or F1 score, and ii) identify the specific sensor being attacked. In response, this paper proposes a novel framework tailored to CAV networks, called CAV-AD, for distinguishing abnormal readings amid multiple anomalies while identifying malicious sensors. Specifically, CAV-AD comprises two main components: i) a novel CNN model architecture called optimized omni-scale CNN (O-OS-CNN), which optimally selects the time scale by generating all possible kernel sizes for input time series data; and ii) an amplification block that increases the values of anomalous readings, enhancing sensitivity for detecting anomalies. Moreover, CAV-AD integrates the proposed O-OS-CNN with a Kalman filter to instantly identify the malicious sensors. We extensively train CAV-AD using real-world datasets containing both instant and constant attacks, evaluating its performance in detecting intrusions from multiple anomalies, which presents a more challenging scenario. Our results demonstrate that CAV-AD outperforms state-of-the-art methods, achieving an average accuracy of 98% and an average F1 score of 89%, while accurately identifying the malicious sensors.
- North America > United States > Missouri > Phelps County > Rolla (0.04)
- North America > Canada > British Columbia (0.04)
- Materials > Metals & Mining (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.46)
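The Kalman-filter side of such a pipeline can be sketched as residual (innovation) monitoring: flag any reading whose normalized innovation exceeds a threshold, and skip the state update for flagged readings so an attack does not corrupt the filter. This is a 1-D constant-value filter with assumed noise settings, not the CAV-AD model:

```python
def kalman_flag(readings, q=1e-4, r=0.5, thresh=3.0):
    """1-D Kalman filter over a sensor stream; return the time steps
    whose normalized innovation |z - x_pred| / sqrt(S) exceeds `thresh`.

    q: process-noise variance, r: measurement-noise variance,
    S: innovation variance. Flagged readings are treated as untrusted
    and excluded from the state update.
    """
    x, p = readings[0], 1.0
    anomalies = []
    for t, z in enumerate(readings[1:], start=1):
        p_pred = p + q
        s = p_pred + r               # innovation variance
        innov = z - x
        if abs(innov) / s ** 0.5 > thresh:
            anomalies.append(t)      # untrusted reading: flag, skip update
            p = p_pred
            continue
        k = p_pred / s               # Kalman gain
        x += k * innov
        p = (1.0 - k) * p_pred
    return anomalies

# steady stream with one injected spike at t = 5
stream = [10.0, 10.1, 9.9, 10.0, 10.1, 25.0, 10.0, 9.9]
flagged = kalman_flag(stream)  # only the spike is flagged
```

In the full framework the CNN handles detection across multiple anomaly types while the filter's per-sensor residuals attribute the attack to a specific sensor; rejecting flagged measurements keeps the state estimate honest during constant attacks.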